Subset ARMA selection via the adaptive Lasso

Authors

  • Kun Chen
  • Kung-Sik Chan
Abstract

Model selection is a critical aspect of subset autoregressive moving-average (ARMA) modelling. It is commonly carried out by subset selection methods, which may be computationally intensive and even impractical when the true ARMA orders of the underlying model are high. On the other hand, automatic variable selection methods based on regularization do not directly apply to this problem because the innovation process is latent. To solve this problem, we propose to identify the optimal subset ARMA model by fitting an adaptive Lasso regression of the time series on its own lags and on lags of the residuals from a long autoregression fitted to the time series, where the residuals serve as proxies for the innovations. We show that, under some mild regularity conditions, the proposed method enjoys the oracle properties: it identifies the correct subset model with probability approaching 1 as the sample size increases, and the estimators of the nonzero coefficients are asymptotically normal, with the same limiting distribution as when the true zero coefficients are known a priori. We illustrate the new method with simulations and a real application.
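The two-step procedure sketched in the abstract (a long autoregression whose residuals proxy the latent innovations, followed by an adaptive Lasso on lags of the series and of those residuals) can be illustrated with a short script. The sketch below is a minimal Python illustration under stated assumptions, not the authors' implementation; the simulated ARMA process, the long-AR order h, the maximal lags p_max and q_max, the weight exponent (gamma = 1), and the penalty alpha are all illustrative choices.

```python
# Minimal sketch, assuming a univariate series y; all tuning values are illustrative.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def lagged_matrix(x, max_lag):
    """Design matrix whose k-th column is x lagged by k, for k = 1..max_lag."""
    n = len(x)
    return np.column_stack([x[max_lag - k : n - k] for k in range(1, max_lag + 1)])

# Simulate a sparse ARMA series: y_t = 0.7 y_{t-2} + e_t + 0.4 e_{t-1}
n = 500
e = rng.standard_normal(n + 2)
y = np.zeros(n + 2)
for t in range(2, n + 2):
    y[t] = 0.7 * y[t - 2] + e[t] + 0.4 * e[t - 1]
y = y[2:]

# Step 1: long autoregression; its residuals serve as proxies for the innovations.
h = 15                                    # long-AR order (illustrative)
X_long = lagged_matrix(y, h)
y_long = y[h:]
phi, *_ = np.linalg.lstsq(X_long, y_long, rcond=None)
e_hat = y_long - X_long @ phi             # innovation proxies, aligned with y_long

# Step 2: adaptive Lasso regression of y_t on its own lags and lags of the residuals.
p_max = q_max = 5                         # candidate AR and MA lags (illustrative)
m = max(p_max, q_max)
Z = np.column_stack([lagged_matrix(y_long, m)[:, :p_max],
                     lagged_matrix(e_hat, m)[:, :q_max]])
resp = y_long[m:]

beta_init, *_ = np.linalg.lstsq(Z, resp, rcond=None)  # initial OLS estimate
w = 1.0 / (np.abs(beta_init) + 1e-8)                   # adaptive weights, gamma = 1
lasso = Lasso(alpha=0.05, fit_intercept=False, max_iter=50000)
lasso.fit(Z / w, resp)                    # absorb the weights into the design
beta = lasso.coef_ / w                    # map back to the original parameterization

labels = [f"y lag {k}" for k in range(1, p_max + 1)] + \
         [f"e lag {k}" for k in range(1, q_max + 1)]
print({lab: round(b, 3) for lab, b in zip(labels, beta) if abs(b) > 1e-6})
```

On a typical run the nonzero estimates concentrate on the lags used to generate the data (here y lag 2 and e lag 1) while the remaining coefficients are shrunk exactly to zero; in practice the penalty level would be chosen by cross-validation or an information criterion rather than fixed in advance.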


Similar resources

Regularized multivariate stochastic regression

In many high dimensional problems, the dependence structure among the variables can be quite complex. An appropriate use of the regularization techniques coupled with other classical statistical methods can often improve estimation and prediction accuracy and facilitate model interpretation, by seeking a parsimonious model representation that involves only the subset of relevant variables. We p...


Model selection via standard error adjusted adaptive lasso

The adaptive lasso is a model selection method shown to be both consistent in variable selection and asymptotically normal in coefficient estimation. The actual variable selection performance of the adaptive lasso depends on the weight used. It turns out that the weight assignment using the OLS estimate (OLS-adaptive lasso) can result in very poor performance when collinearity of the model matr...


Bayesian Adaptive Lasso

We propose the Bayesian adaptive Lasso (BaLasso) for variable selection and coefficient estimation in linear regression. The BaLasso is adaptive to the signal level by adopting different shrinkage for different coefficients. Furthermore, we provide a model selection machinery for the BaLasso by assessing the posterior conditional mode estimates, motivated by the hierarchical Bayesian interpreta...


Estimation and Selection via Absolute Penalized Convex Minimization And Its Multistage Adaptive Applications

The ℓ1-penalized method, or the Lasso, has emerged as an important tool for the analysis of large data sets. Many important results have been obtained for the Lasso in linear regression which have led to a deeper understanding of high-dimensional statistical problems. In this article, we consider a class of weighted ℓ1-penalized estimators for convex loss functions of a general form, including ...


Adaptive Posterior Mode Estimation of a Sparse Sequence for Model Selection

For the problem of estimating a sparse sequence of coefficients of a parametric or nonparametric generalized linear model, posterior mode estimation with a Subbotin(λ, ν) prior achieves thresholding and therefore model selection when ν ∈ [0, 1] for a class of likelihood functions. The proposed estimator also offers a continuum between the (forward/backward) best subset estimator (ν = 0), its ap...




Publication date: 2011